9 research outputs found

    Emotion Recognition from Skeletal Movements

    Automatic emotion recognition has become an important trend in many artificial intelligence (AI) based applications and has been widely explored in recent years. Most research in the area of automated emotion recognition is based on facial expressions or speech signals. Although the influence of the emotional state on body movements is undeniable, this source of expression is still underestimated in automatic analysis. In this paper, we propose a novel method to recognise seven basic emotional states, namely happy, sad, surprise, fear, anger, disgust, and neutral, utilising body movement. We analyse motion capture data recorded under the seven basic emotional states from professional actors and actresses using a Microsoft Kinect v2 sensor. We propose a new representation of affective movements based on sequences of body joints. The proposed algorithm creates a sequential model of affective movement based on low-level features inferred from the spatial location and the orientation of joints within the tracked skeleton. In the experiments, different deep neural networks were employed and compared to recognise the emotional state of the acquired motion sequences. The results show the feasibility of automatic emotion recognition from sequences of body gestures, which can serve as an additional source of information in multimodal emotion recognition.
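    A joint-sequence representation of this kind can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the specific low-level features (root-relative joint positions plus frame-to-frame velocities) and all names are assumptions; only the 25-joint count is taken from the Kinect v2 skeleton.

```python
import numpy as np

def movement_features(joints, fps=30.0):
    """Build a per-frame feature sequence from tracked 3-D joints.

    joints: array of shape (T, J, 3) -- T frames, J joints, (x, y, z).
    Returns an array of shape (T-1, J*6): joint positions relative to
    the first joint (taken here as the spine base) concatenated with
    frame-to-frame velocities -- a simple stand-in for the paper's
    low-level spatial features.
    """
    rel = joints - joints[:, :1, :]          # root-relative positions
    vel = np.diff(joints, axis=0) * fps      # finite-difference velocities
    feats = np.concatenate([rel[1:], vel], axis=2)  # align frame counts
    return feats.reshape(feats.shape[0], -1)

# Kinect v2 tracks 25 joints; fake a 90-frame clip for illustration.
clip = np.random.default_rng(0).normal(size=(90, 25, 3))
seq = movement_features(clip)   # one feature vector per frame transition
```

    Each row of `seq` could then feed a recurrent or temporal-convolution network as one step of the affective movement sequence.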

    Survey on Emotional Body Gesture Recognition

    Automatic emotion recognition has become a trending research topic in the past decade. While works based on facial expressions or speech abound, recognizing affect from body gestures remains a less explored topic. We present a new comprehensive survey hoping to boost research in the field. We first introduce emotional body gestures as a component of what is commonly known as "body language" and discuss general aspects such as gender differences and cultural dependence. We then define a complete framework for automatic emotional body gesture recognition. We introduce person detection and discuss static and dynamic body pose estimation methods, both in RGB and 3D. We then review the recent literature on representation learning and emotion recognition from images of emotionally expressive gestures. We also discuss multi-modal approaches that combine speech or face with body gestures for improved emotion recognition. While pre-processing methodologies (e.g., human detection and pose estimation) are nowadays mature technologies, fully developed for robust large-scale analysis, we show that for emotion recognition the quantity of labelled data is scarce, there is no agreement on clearly defined output spaces, and the representations used are shallow, largely based on naive geometrical features.

    Dominant and Complementary Multi-Emotional Facial Expression Recognition Using C-Support Vector Classification

    We propose a new facial expression recognition model which introduces more than 30 detailed facial expressions recognisable by any artificial intelligence interacting with a human. Throughout this research, we introduce two categories of emotions, namely dominant emotions and complementary emotions. The complementary emotion is recognised from the eye region when the dominant emotion is anger, fear, or sadness; when the dominant emotion is disgust or happiness, the complementary emotion is mainly conveyed by the mouth. In order to verify the tagged dominant and complementary emotions, randomly chosen people voted on the recognised multi-emotional facial expressions. On average, 73.88% of the voters agreed on the correctness of the recognised multi-emotional facial expressions.
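    The region-routing rule in the abstract (eyes for anger/fear/sadness, mouth for disgust/happiness) can be sketched with scikit-learn's `SVC`, which implements C-support vector classification. This is a minimal illustration under stated assumptions: the landmark features are synthetic stand-ins, the label sets and function names are hypothetical, and only the routing rule itself comes from the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
EYE_DOMINANT = {"angry", "fearful", "sad"}   # eyes carry the complement

# Synthetic stand-ins for eye- and mouth-region landmark features.
X_eye = rng.normal(size=(150, 10))
X_mouth = rng.normal(size=(150, 8))
y = rng.integers(0, 3, size=150)             # complementary-emotion labels

# One C-SVC per facial region.
eye_clf = SVC(C=1.0, kernel="rbf").fit(X_eye, y)
mouth_clf = SVC(C=1.0, kernel="rbf").fit(X_mouth, y)

def predict_complementary(dominant, eye_feats, mouth_feats):
    """Route to the region model implied by the dominant emotion."""
    if dominant in EYE_DOMINANT:
        return eye_clf.predict(eye_feats.reshape(1, -1))[0]
    return mouth_clf.predict(mouth_feats.reshape(1, -1))[0]

pred = predict_complementary("happy", X_eye[0], X_mouth[0])
```

    For a happy dominant emotion the mouth-region model is consulted, matching the rule described above.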

    Efficiency of chosen speech descriptors in relation to emotion recognition

    This research paper presents parametrization of emotional speech using a pool of features commonly utilized in emotion recognition, such as fundamental frequency, formants, energy, and MFCC, PLP, and LPC coefficients. The pool is additionally expanded with perceptual coefficients such as BFCC, HFCC, RPLP, and RASTA-PLP, which are used in speech recognition but have not been applied in emotion detection. The main contribution of this work is a comparison of the emotion detection accuracy achieved with each feature type, based on the results provided by both k-NN and SVM classifiers with 10-fold cross-validation. The analysis was performed on two different Polish emotional speech databases: voice performances by professional actors compared with the author's spontaneous speech.
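    The per-feature-type comparison with k-NN and SVM under 10-fold cross-validation could look roughly like the loop below. This is a sketch, not the paper's pipeline: the random arrays stand in for real MFCC/PLP feature matrices (real use would extract them from audio), and the dictionary keys and classifier settings are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Synthetic stand-ins for per-utterance feature sets (e.g. 13 MFCCs,
# 12 PLP coefficients); y holds six emotion-class labels.
feature_sets = {"mfcc": rng.normal(size=(200, 13)),
                "plp": rng.normal(size=(200, 12))}
y = rng.integers(0, 6, size=200)

scores = {}
for name, X in feature_sets.items():
    for label, clf in (("knn", KNeighborsClassifier(5)), ("svm", SVC())):
        # Scale features, then score with 10-fold cross-validation.
        model = make_pipeline(StandardScaler(), clf)
        scores[(name, label)] = cross_val_score(model, X, y, cv=10).mean()
```

    Comparing `scores[("mfcc", "svm")]` against `scores[("plp", "svm")]` (and likewise for k-NN) is exactly the kind of per-descriptor accuracy comparison the abstract describes.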

    Vocal-based emotion recognition using random forests and decision tree

    This paper proposes a new vocal-based emotion recognition method using random forests, where features computed on the whole speech signal, namely pitch, intensity, the first four formants, the first four formant bandwidths, mean autocorrelation, mean noise-to-harmonics ratio, and standard deviation, are used to recognise the emotional state of a speaker. The proposed technique adopts random forests to represent the speech signals, along with a decision-tree approach, in order to classify them into different categories. The emotions are broadly categorised into six groups: happiness, fear, sadness, neutral, surprise, and disgust. The Surrey Audio-Visual Expressed Emotion database is used. According to the experimental results using leave-one-out cross-validation, by combining the most significant prosodic features, the proposed method has an average recognition rate of , and the highest recognition rate of was obtained for the happiness voice signals. The proposed method has a higher average recognition rate and a higher best recognition rate than linear discriminant analysis, as well as a higher average recognition rate than deep neural networks, both of which were implemented on the same database.
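    A random-forest classifier evaluated with leave-one-out cross-validation, as described above, can be sketched in a few lines. The feature matrix here is synthetic (a stand-in for the prosodic features listed in the abstract), and the forest size is an assumption, not the paper's setting.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(3)
# Stand-in for prosodic features per utterance (pitch, intensity,
# formants and bandwidths, autocorrelation, NHR, std. deviation, ...).
X = rng.normal(size=(60, 14))
y = rng.integers(0, 6, size=60)   # six emotion categories

rf = RandomForestClassifier(n_estimators=100, random_state=0)
# Leave-one-out: train on all utterances but one, test on the held-out
# sample, and average the per-sample accuracies.
acc = cross_val_score(rf, X, y, cv=LeaveOneOut()).mean()
```

    On random labels this yields chance-level accuracy; with real prosodic features the same loop produces the recognition rates reported in the paper.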

    Supervised Vocal-Based Emotion Recognition Using Multiclass Support Vector Machine, Random Forests, and Adaboost

    This paper investigates and compares three different classifiers, namely multi-class Support Vector Machine, Adaboost, and random forests, for the purpose of vocal emotion recognition. Additionally, the decisions of all classifiers are combined using majority voting. The proposed method has been applied to two different emotional databases: the Surrey Audio-Visual Expressed Emotion Database and the Polish Emotional Speech Database. Fourteen features, namely pitch, intensity, the first through fourth formants and their bandwidths, mean autocorrelation, mean noise-to-harmonics ratio, mean harmonics-to-noise ratio, and standard deviation, have been extracted from both databases. The best recognition rate on the Surrey Audio-Visual Expressed Emotion Database, 75.71%, was achieved by the random forest algorithm. On the Polish Emotional Speech Database, the best recognition rate, 87.5%, was achieved by the Adaboost classifier. The achieved recognition rates are higher than those presented by Yuncu et al. on the same databases (73.81% on the Surrey Audio-Visual Expressed Emotion Database and 71.30% on the Polish Emotional Speech Database).
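    Combining the three classifiers by majority voting maps directly onto scikit-learn's `VotingClassifier` with hard voting. This is an illustrative sketch under stated assumptions: the 14-column feature matrix is synthetic, and the hyperparameters and fold count are not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import (AdaBoostClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 14))   # stand-in for the 14 acoustic features
y = rng.integers(0, 6, size=120)

# Hard voting: each classifier casts one vote per sample and the
# majority label wins, as in the paper's decision-combination step.
vote = VotingClassifier(
    estimators=[("svm", SVC()),
                ("ada", AdaBoostClassifier()),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="hard")
acc = cross_val_score(vote, X, y, cv=5).mean()
```

    With hard voting, ties are broken by label order; soft voting (averaging class probabilities) is an alternative when all base classifiers expose `predict_proba`.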

    Virtual Reality and Its Applications in Education: Survey

    In the education process, students face problems with understanding due to the complexity of the material and the need for abstract thinking and concepts. More and more educational centres around the world have started to introduce powerful new technology-based tools that help meet the needs of a diverse student population. Over the last several years, virtual reality (VR) has moved from being the purview of gaming to professional development. It plays an important role in the teaching process, providing an interesting and engaging way of acquiring information. What follows is an overview of the major trends, opportunities, and concerns associated with VR in education. We present new opportunities in VR and put together the most interesting recent virtual reality applications used in education, in relation to several education areas such as general, engineering, and health-related education. Additionally, this survey contributes by presenting methods for creating scenarios and different approaches to testing and validation. Lastly, we conclude and discuss future directions of VR and its potential to improve the learning experience.

    The ATLAS Experiment at the CERN Large Hadron Collider

    The ATLAS detector, as installed in its experimental cavern at Point 1 at CERN, is described in this paper. A brief overview of the expected performance of the detector when the Large Hadron Collider begins operation is also presented.